Efficient Implementation of a Synchronous Parallel Push-Relabel Algorithm
Motivated by the observation that FIFO-based push-relabel algorithms are able
to outperform highest label-based variants on modern, large maximum flow
problem instances, we introduce an efficient implementation of the algorithm
that uses coarse-grained parallelism to avoid the problems of existing parallel
approaches. We demonstrate good relative and absolute speedups of our algorithm
on a set of large graph instances taken from real-world applications. On a
modern 40-core machine, our parallel implementation outperforms existing
sequential implementations by up to a factor of 12 and other parallel
implementations by factors of up to 3.
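The sequential FIFO variant that such implementations build on can be sketched as follows. This is a generic textbook-style push-relabel on a capacity matrix, not the paper's parallel implementation; the function name and data representation are illustrative assumptions:

```python
from collections import deque

def fifo_push_relabel(C, s, t):
    """Sequential FIFO push-relabel max flow; C is a capacity matrix."""
    n = len(C)
    F = [[0] * n for _ in range(n)]   # flow (skew-symmetric)
    height = [0] * n
    excess = [0] * n
    height[s] = n
    active = deque()                  # FIFO queue of active vertices
    for v in range(n):                # saturate all edges out of the source
        if C[s][v] > 0:
            F[s][v] = C[s][v]
            F[v][s] = -C[s][v]
            excess[v] = C[s][v]
            if v != t:
                active.append(v)
    while active:
        u = active.popleft()
        while excess[u] > 0:          # discharge u completely
            for v in range(n):
                if C[u][v] - F[u][v] > 0 and height[u] == height[v] + 1:
                    d = min(excess[u], C[u][v] - F[u][v])
                    F[u][v] += d
                    F[v][u] -= d
                    excess[u] -= d
                    excess[v] += d
                    if v not in (s, t) and excess[v] == d:
                        active.append(v)   # v just became active
                    if excess[u] == 0:
                        break
            else:
                # no admissible edge left: lift u to one above its
                # lowest residual neighbour
                height[u] = 1 + min(height[v] for v in range(n)
                                    if C[u][v] - F[u][v] > 0)
    return excess[t]
```

The coarse-grained parallel version described in the abstract would discharge many active vertices per synchronous round instead of one at a time; the sequential queue above is the baseline behaviour.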
Randomized Revenue Monotone Mechanisms for Online Advertising
Online advertising is the main source of revenue for many Internet firms. A
central component of online advertising is the underlying mechanism that
selects and prices the winning ads for a given ad slot. In this paper we study
designing a mechanism for the Combinatorial Auction with Identical Items (CAII)
in which we are interested in selling identical items to a group of bidders
each demanding a certain number of items between and . CAII generalizes
important online advertising scenarios such as image-text and video-pod
auctions [GK14]. In the image-text auction we want to fill an advertising slot on a
publisher's web page with either text-ads or a single image-ad, and in the
video-pod auction we want to fill an advertising break of seconds with
video-ads of possibly different durations.
Our goal is to design truthful mechanisms that satisfy Revenue Monotonicity
(RM). RM is a natural constraint which states that the revenue of a mechanism
should not decrease if the number of participants increases or if a participant
increases her bid.
[GK14] showed that no deterministic RM mechanism can attain PoRM of less than
for CAII, i.e., no deterministic mechanism can attain more than
fraction of the maximum social welfare. [GK14] also design a
mechanism with PoRM of for CAII.
In this paper, we seek to overcome the impossibility result of [GK14] for
deterministic mechanisms by using the power of randomization. We show that by
using randomization, one can attain a constant PoRM. In particular, we design a
randomized RM mechanism with PoRM of for CAII.
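To see why revenue monotonicity is a non-trivial constraint, a small sketch (ours, not the paper's mechanism) shows the classic failure of plain VCG in a CAII-style multi-unit auction: adding a bidder can drive revenue down. The bidder names and the brute-force welfare search are illustrative assumptions for tiny instances only:

```python
from itertools import combinations

def vcg_multi_unit(k, bids):
    """VCG for k identical items; bids[i] = (demand, value), all-or-nothing.
    Returns (winners, payments, revenue)."""
    def best(allowed):
        # maximize welfare over feasible sets of bidders (brute force)
        best_w, best_S = 0, ()
        bidders = list(allowed)
        for r in range(len(bidders) + 1):
            for S in combinations(bidders, r):
                if sum(bids[i][0] for i in S) <= k:
                    w = sum(bids[i][1] for i in S)
                    if w > best_w:
                        best_w, best_S = w, S
        return best_w, best_S

    w, S = best(bids)
    payments = {}
    for i in S:
        # Clarke pivot: externality imposed on the others
        w_without_i, _ = best([j for j in bids if j != i])
        payments[i] = w_without_i - (w - bids[i][1])
    return S, payments, sum(payments.values())
```

With k = 2 items, bidders A (wants both for 2) and B (wants one for 2) yield revenue 2; adding a third bidder C (wants one for 2) makes VCG sell to B and C at price 0 each, dropping revenue to 0. This is the kind of non-monotonicity the RM constraint rules out.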
Node-balancing by edge-increments
Suppose you are given a graph with a weight assignment on its vertices
and that your objective is to modify the assignment using legal
steps such that all vertices will have the same weight, where in each legal
step you are allowed to choose an edge and increment the weights of its
endpoints by .
In this paper we study several variants of this problem for graphs and
hypergraphs. On the combinatorial side we show connections with fundamental
results from matching theory such as Hall's Theorem and Tutte's Theorem. On the
algorithmic side we study the computational complexity of associated decision
problems.
Our main results are a characterization of the graphs for which any initial
assignment can be balanced by edge-increments and a strongly polynomial-time
algorithm that computes a balancing sequence of increments if one exists.
Comment: 10 pages.
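The decision problem can be explored on tiny instances with a brute-force search. This sketch is illustrative only (the paper gives a strongly polynomial-time algorithm, which this does not implement); it looks for nonnegative integer increments, one per edge, where each unit adds 1 to both endpoints:

```python
from itertools import product

def balancing_increments(vertices, edges, w, max_target=10):
    """Brute-force: find a target weight and per-edge increment counts x_e
    (each unit of x_e adds 1 to both endpoints of e) balancing all vertices.
    Returns (target, {edge: count}) or None."""
    for target in range(max(w.values()), max_target + 1):
        need = {v: target - w[v] for v in vertices}
        bound = max(need.values())
        for x in product(range(bound + 1), repeat=len(edges)):
            add = {v: 0 for v in vertices}
            for (u, v2), k in zip(edges, x):
                add[u] += k
                add[v2] += k
            if all(add[v] == need[v] for v in vertices):
                return target, dict(zip(edges, x))
    return None
```

On a triangle with weights (0, 1, 1) a balancing sequence exists; on a single edge with weights (0, 1) it never does, since each step preserves the weight difference of the two endpoints.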
Speeding up shortest path algorithms
Given an arbitrary, non-negatively weighted, directed graph we
present an algorithm that computes all pairs shortest paths in time
, where is the number of
different edges contained in shortest paths and is the running time of
an algorithm that solves the single-source shortest path problem (SSSP).
This is a substantial improvement over a trivial times application of
that runs in . In our algorithm we use as a black box, and hence any
improvement on also results in an improvement of our algorithm.
Furthermore, a combination of our method, Johnson's reweighting technique and
topological sorting results in an all-pairs
shortest path algorithm for arbitrarily-weighted directed acyclic graphs.
In addition, we also point out a connection between the complexity of a
certain sorting problem defined on shortest paths and SSSP.
Comment: 10 pages.
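The trivial baseline the abstract compares against, one SSSP call per vertex with Dijkstra as the black box, can be sketched as follows (function names and the adjacency-dict representation are our assumptions):

```python
import heapq

def dijkstra(adj, s):
    """The SSSP black box: non-negative edge weights, adjacency dict
    adj[u] = [(v, weight), ...]."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def apsp(adj, vertices):
    """Trivial all-pairs baseline: one SSSP call per vertex."""
    return {s: dijkstra(adj, s) for s in vertices}
```

The paper's improvement amortizes work across these calls by exploiting how few distinct edges actually occur on shortest paths; the sketch above is only the naive reference point.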
Exact bounds for distributed graph colouring
We prove exact bounds on the time complexity of distributed graph colouring.
If we are given a directed path that is properly coloured with colours, by
prior work it is known that we can find a proper 3-colouring in communication rounds. We close the gap between upper and
lower bounds: we show that for infinitely many the time complexity is
precisely communication rounds.
Comment: 16 pages, 3 figures.
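The upper-bound side of this problem rests on the classic Cole-Vishkin colour-reduction technique. A centralized simulation sketch (our own, assuming unique IDs as initial colours; the final 6-to-3 reduction is the standard greedy stage):

```python
def colour_reduction_step(colours):
    """One synchronous Cole-Vishkin step on a directed path.
    colours must be a proper colouring; node i's predecessor is i-1."""
    new = []
    for i, c in enumerate(colours):
        p = colours[i - 1] if i > 0 else c ^ 1   # virtual predecessor at head
        diff = c ^ p
        j = (diff & -diff).bit_length() - 1      # index of lowest differing bit
        new.append(2 * j + ((c >> j) & 1))       # encode (index, own bit)
    return new

def three_colour_path(ids):
    """Iterate colour reduction down to 6 colours, then greedily shrink to 3."""
    colours = list(ids)                          # unique IDs = initial colouring
    while max(colours) > 5:
        colours = colour_reduction_step(colours)
    # shrink {0..5} -> {0,1,2}: each colour class is an independent set,
    # so its nodes can all be recoloured in one round
    for cls in (5, 4, 3):
        for i, c in enumerate(colours):
            if c == cls:
                nbrs = {colours[j] for j in (i - 1, i + 1)
                        if 0 <= j < len(colours)}
                colours[i] = min(x for x in (0, 1, 2) if x not in nbrs)
    return colours
```

Each reduction step shrinks the colour space roughly logarithmically, which is why the round complexity involves the iterated logarithm; the exact constants are what the paper pins down.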
Flow Faster: Efficient Decision Algorithms for Probabilistic Simulations
Strong and weak simulation relations have been proposed for Markov chains,
while strong simulation and strong probabilistic simulation relations have been
proposed for probabilistic automata. However, the known decision algorithms
for strong and weak simulation over Markov chains, and for strong simulation
over probabilistic automata, are not efficient, which has so far left it
unclear whether they can be used as effectively as their non-probabilistic
counterparts. This
paper presents drastically improved algorithms to decide whether some
(discrete- or continuous-time) Markov chain strongly or weakly simulates
another, or whether a probabilistic automaton strongly simulates another. The
key innovation is the use of parametric maximum flow techniques to amortize
computations. We also present a novel algorithm for deciding strong
probabilistic simulation preorders on probabilistic automata, which has
polynomial complexity via a reduction to an LP problem. When extending the
algorithms for probabilistic automata to their continuous-time counterpart, we
retain the same complexity for both strong and strong probabilistic
simulations.
Comment: LMC
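The core subroutine behind these decision algorithms is checking whether a weight function exists between two distributions with respect to a relation R, which reduces to a max-flow feasibility test. A sketch with a plain Edmonds-Karp check (not the paper's parametric-flow optimization; function and node names are our assumptions):

```python
from collections import deque
from fractions import Fraction

def weight_function_exists(mu, mu_prime, R):
    """Decide whether mass of mu can be split along R-pairs onto mu_prime:
    build source -> supp(mu) -> supp(mu_prime) -> sink and check that the
    max flow equals the total mass. Masses may be ints or Fractions."""
    cap = {}
    def add(u, v, c):
        cap[(u, v)] = cap.get((u, v), Fraction(0)) + c
        cap.setdefault((v, u), Fraction(0))      # residual back-edge
    total = sum(mu.values())
    for u, p in mu.items():
        add('S', ('L', u), p)
    for v, q in mu_prime.items():
        add(('R', v), 'T', q)
    for (u, v) in R:
        if u in mu and v in mu_prime:
            add(('L', u), ('R', v), total)       # effectively infinite
    flow = Fraction(0)
    while True:                                  # Edmonds-Karp: BFS augmenting paths
        parent = {'S': None}
        q = deque(['S'])
        while q and 'T' not in parent:
            x = q.popleft()
            for (a, b), c in cap.items():
                if a == x and c > 0 and b not in parent:
                    parent[b] = a
                    q.append(b)
        if 'T' not in parent:
            break
        path, b = [], 'T'
        while parent[b] is not None:
            path.append((parent[b], b))
            b = parent[b]
        aug = min(cap[e] for e in path)
        for (a, b) in path:
            cap[(a, b)] -= aug
            cap[(b, a)] += aug
        flow += aug
    return flow == total
```

The paper's key insight is that across the many such checks performed while refining the relation, parametric max-flow lets the flow computations be amortized instead of restarted from scratch each time.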
The supersymmetric interpretation of the EGRET excess of diffuse Galactic gamma rays
Recently it was shown that the excess of diffuse Galactic gamma rays above 1
GeV traces the Dark Matter halo, as proven by reconstructing the peculiar shape
of the rotation curve of our Galaxy from the gamma ray excess. This can be
interpreted as a Dark Matter annihilation signal. In this paper we investigate
if this interpretation is consistent with Supersymmetry. It is found that the
EGRET excess combined with all electroweak constraints is fully consistent with
the minimal mSUGRA model for scalars in the TeV range and gauginos below 500
GeV.
Comment: 11 pages, 6 figures; extended version with more figures, as accepted
for publication in Phys. Letters.
Popular matchings in the marriage and roommates problems
Popular matchings have recently been a subject of study in the context of the so-called House Allocation Problem, where the objective is to match applicants to houses over which the applicants have preferences. A matching M is called popular if there is no other matching M′ with the property that more applicants prefer their allocation in M′ to their allocation in M. In this paper we study popular matchings in the context of the Roommates Problem, including its special (bipartite) case, the Marriage Problem. We investigate the relationship between popularity and stability, and describe efficient algorithms to test a matching for popularity in these settings. We also show that, when ties are permitted in the preferences, it is NP-hard to determine whether a popular matching exists in both the Roommates and Marriage cases.
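Testing popularity boils down to counting pairwise "votes" between matchings. A sketch of this vote count (our own helper names), run on the classic 3-agent roommates instance with cyclic preferences, where the matchings beat each other in a cycle and no popular matching exists:

```python
def popularity_margin(M1, M2, prefs):
    """Votes preferring M2 over M1 minus votes preferring M1 over M2.
    prefs[a] is a's preference list (best first); matchings map each
    matched agent to its partner, unmatched agents are simply absent."""
    def rank(a, p):
        # being unmatched is worse than any listed partner
        return prefs[a].index(p) if p is not None else len(prefs[a])
    margin = 0
    for a in prefs:
        r1, r2 = rank(a, M1.get(a)), rank(a, M2.get(a))
        if r2 < r1:
            margin += 1
        elif r1 < r2:
            margin -= 1
    return margin

def is_popular(M, matchings, prefs):
    """M is popular among `matchings` if no alternative beats it."""
    return all(popularity_margin(M, M2, prefs) <= 0 for M2 in matchings)
```

With preferences a: [b, c], b: [c, a], c: [a, b], the three maximal matchings {a-b}, {a-c}, {b-c} each lose to another by one vote, so none is popular; this illustrates why existence is the interesting question in the roommates setting.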
Separating Hierarchical and General Hub Labelings
In the context of distance oracles, a labeling algorithm computes vertex
labels during preprocessing. An query computes the corresponding distance
from the labels of and only, without looking at the input graph. Hub
labels is a class of labels that has been extensively studied. Performance of
the hub label query depends on the label size. Hierarchical labels are a
natural special kind of hub labels. These labels are related to other problems
and can be computed more efficiently. This brings up a natural question of the
quality of hierarchical labels. We show that there is a gap: optimal
hierarchical labels can be polynomially bigger than the general hub labels. To
prove this result, we give tight upper and lower bounds on the size of
hierarchical and general labels for hypercubes.
Comment: 11 pages, minor corrections, MFCS 201
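The hub-label query itself is a simple scan of the two labels for common hubs; its cost is proportional to the label sizes, which is exactly why the size gap shown in the paper matters. A sketch with hand-built labels for a unit-weight 3-vertex path a-b-c (labels and names are illustrative):

```python
def hub_query(labels, u, v):
    """Distance from hub labels only: min over common hubs h of
    d(u, h) + d(h, v). labels[x] maps each hub of x to its distance."""
    common = labels[u].keys() & labels[v].keys()
    if not common:
        return float('inf')       # cover property violated or disconnected
    return min(labels[u][h] + labels[v][h] for h in common)
```

The labels in the test below are hierarchical under the vertex order a > b > c: each label stores, for every other vertex, only the highest-ranked vertex on a shortest path between them.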
Level-Based Analysis of Genetic Algorithms and Other Search Processes
The fitness-level technique is a simple and old way to derive upper bounds for the expected runtime of simple elitist evolutionary algorithms (EAs). Recently, the technique has been adapted to deduce the runtime of algorithms with non-elitist populations and unary variation operators [2,8]. In this paper, we show that the restriction to unary variation operators can be removed. This gives rise to a much more general analytical tool which is applicable to a wide range of search processes. As introductory examples, we provide simple runtime analyses of many variants of the Genetic Algorithm on well-known benchmark functions, such as OneMax, LeadingOnes, and the sorting problem.
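As a minimal illustration of the fitness-level idea in its original elitist setting (the paper's contribution is precisely the extension beyond this), here is a (1+1) EA on OneMax alongside the classical e·n·H_n fitness-level upper bound on its expected runtime; all names are ours:

```python
import math
import random

def one_plus_one_ea_onemax(n, seed=0, budget=200_000):
    """(1+1) EA on OneMax: flip each bit independently with prob 1/n,
    accept the offspring if it is not worse. Returns the step at which
    the optimum was found, or None if the budget ran out."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    for step in range(1, budget + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]   # standard bit mutation
        fy = sum(y)
        if fy >= fx:                                     # elitist acceptance
            x, fx = y, fy
        if fx == n:
            return step
    return None

def fitness_level_bound(n):
    """Classical fitness-level upper bound e * n * H_n on the expected
    runtime: on level i (i zero-bits left), an improving step has
    probability at least (i/n) * (1 - 1/n)^(n-1) >= i / (e * n)."""
    return math.e * n * sum(1.0 / i for i in range(1, n + 1))
```

The bound sums, over fitness levels, the reciprocal of a lower bound on the probability of leaving each level; the paper's level-based theorem generalizes this bookkeeping to non-elitist populations with crossover.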